Utilizing trimap guidance and fusing multi-level features are two important issues for trimap-based matting with pixel-level prediction. To exploit trimap guidance, most existing approaches simply concatenate the trimap and the image to feed a deep network, or apply an extra network to extract more trimap guidance, which runs into a conflict between efficiency and effectiveness. For the emerging content-based feature fusion, most existing matting methods focus only on local features, which lack the guidance of global features carrying strong semantic information about the object of interest. In this paper, we propose a trimap-guided feature mining and fusion network consisting of our trimap-guided non-background multi-scale pooling (TMP) module and global-local context-aware fusion (GLF) modules. Considering that the trimap provides strong semantic guidance, our TMP module performs effective feature mining on the object of interest under the guidance of the trimap, without extra parameters. Furthermore, our GLF modules use the global semantic information of the object of interest mined by our TMP module to guide an effective global-local context-aware multi-level feature fusion. In addition, we build a Common Interesting Object Matting (CIOM) dataset to advance high-quality image matting. Experimental results on the Composition-1k test set, the alphamatting benchmark, and our CIOM test set demonstrate that our method outperforms state-of-the-art approaches. Code and models will be publicly released soon.
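The abstract does not spell out how a parameter-free, trimap-guided pooling could be implemented; the following is a minimal sketch of one plausible reading, where the trimap's non-background region defines a mask for multi-scale masked average pooling. The function name, pooling scales, and trimap encoding are our assumptions, not the paper's.

```python
import torch
import torch.nn.functional as F

def trimap_masked_multiscale_pool(feat, trimap, scales=(1, 2, 4)):
    """Hypothetical sketch of trimap-guided non-background pooling.

    feat:   (B, C, H, W) feature map
    trimap: (B, 1, H, W) with 0 = background, 0.5 = unknown, 1 = foreground
    Averages features over non-background pixels only, on grids of several
    scales, then upsamples and averages -- no learnable parameters involved.
    """
    mask = (trimap > 0).float()                      # non-background mask
    pooled = 0
    for s in scales:
        # masked average pooling on an s x s grid:
        # mean(feat * mask) / mean(mask) == sum(feat * mask) / sum(mask)
        num = F.adaptive_avg_pool2d(feat * mask, s)
        den = F.adaptive_avg_pool2d(mask, s).clamp(min=1e-6)
        ctx = num / den                              # masked mean per cell
        pooled = pooled + F.interpolate(ctx, size=feat.shape[-2:], mode="nearest")
    return pooled / len(scales)
```

Because the mask zeroes out background before pooling, the global context is computed only from the object of interest, which matches the parameter-free guidance claim above.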
State-of-the-art unsupervised re-ID methods train neural networks with a memory-based non-parametric softmax loss. Instance feature vectors stored in the memory are assigned pseudo-labels by clustering and are updated at the instance level. However, varying cluster sizes lead to inconsistency in the update progress of each cluster. To solve this problem, we present Cluster Contrast, which stores feature vectors and computes the contrastive loss at the cluster level. Our method employs a unique cluster representation to describe each cluster, resulting in a cluster-level memory dictionary. In this way, the consistency of clustering can be effectively maintained throughout the pipeline, and GPU memory consumption can be significantly reduced. Consequently, our method resolves the cluster-inconsistency problem and scales to larger datasets. In addition, we adopt different clustering algorithms to demonstrate the robustness and generalization of our framework. Applying Cluster Contrast to a standard unsupervised re-ID pipeline achieves significant improvements of 9.9%, 8.3%, and 12.1% mAP over state-of-the-art purely unsupervised re-ID methods, and of 5.5%, 4.8%, and 4.4% mAP over state-of-the-art unsupervised domain adaptation re-ID methods on the Market, Duke, and MSMT17 datasets, respectively. Code is available at https://github.com/alibaba/cluster-contrast.
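As a rough illustration of the cluster-level memory dictionary described above, here is a minimal PyTorch sketch, assuming an InfoNCE-style loss against one L2-normalized representation per cluster and a momentum update of the matched entry. Hyperparameter values and class names are illustrative, not taken from the released code.

```python
import torch
import torch.nn.functional as F

class ClusterMemory:
    """Hypothetical sketch of a cluster-level memory dictionary.

    One normalized representation per cluster; query features are contrasted
    against all cluster representations, and the matching entry is updated
    with momentum, so every cluster advances at the same pace regardless of
    its size.
    """
    def __init__(self, num_clusters, dim, momentum=0.2, temperature=0.05):
        self.centroids = F.normalize(torch.randn(num_clusters, dim), dim=1)
        self.m = momentum
        self.t = temperature

    def loss_and_update(self, feats, labels):
        feats = F.normalize(feats, dim=1)
        logits = feats @ self.centroids.T / self.t   # (B, num_clusters)
        loss = F.cross_entropy(logits, labels)       # cluster-level contrast
        with torch.no_grad():                        # momentum update
            for f, y in zip(feats, labels):
                c = self.m * self.centroids[y] + (1 - self.m) * f
                self.centroids[y] = F.normalize(c, dim=0)
        return loss
```

Since each sample touches exactly one cluster entry, the update pace no longer depends on cluster size, which is the consistency argument the abstract makes; storing one vector per cluster rather than per instance is also what reduces GPU memory.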
Transformers, which are popular for language modeling, have recently been explored for vision tasks, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a fixed-length sequence of tokens and then applies multiple Transformer layers to model their global relations for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset such as ImageNet. We find this is because: 1) the simple tokenization of input images fails to model important local structure among neighboring pixels, such as edges and lines, leading to low training sample efficiency; and 2) the redundant attention backbone design of ViT yields limited feature richness under fixed computation budgets and limited training samples. To overcome these limitations, we propose a new Tokens-to-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation that progressively structurizes the image into tokens by recursively aggregating neighboring tokens into one token (Tokens-to-Token), so that the local structure represented by surrounding tokens can be modeled and the token length can be reduced; and 2) an efficient backbone with a deep-narrow structure for the vision transformer, motivated by CNN architecture design after an empirical study. Notably, T2T-ViT reduces the parameter count and MACs of the vanilla ViT by half while achieving more than 3.0% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves performance comparable to MobileNets when trained directly on ImageNet. For example, a T2T-ViT of comparable size to ResNet50 (21.5M parameters) can achieve 83.3% top-1 accuracy at an image resolution of 384×384. (Code: https://github.com/yitu-opensource/t2t-vit)
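To make the layer-wise T2T transformation concrete, here is a simplified sketch of one step, assuming the "soft split" is an overlapping unfold followed by a linear re-embedding. The kernel size, stride, and projection are our simplifications of the paper's design, not a faithful reproduction.

```python
import torch
import torch.nn as nn

class T2TStep(nn.Module):
    """Hypothetical sketch of one Tokens-to-Token transformation step.

    Tokens are reshaped back to a 2D grid, neighboring tokens are merged by
    an overlapping unfold ("soft split"), and the concatenated neighborhoods
    become new, fewer tokens that encode local structure.
    """
    def __init__(self, dim, kernel=3, stride=2):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride, padding=1)
        self.proj = nn.Linear(dim * kernel * kernel, dim)  # re-embed merged tokens

    def forward(self, tokens, h, w):
        b, n, c = tokens.shape                 # n == h * w
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.unfold(x)                     # (b, c*k*k, n_new), n_new < n
        x = x.transpose(1, 2)                  # (b, n_new, c*k*k)
        return self.proj(x)                    # shorter token sequence
```

The overlap between unfold windows is what lets each new token see its spatial neighbors, addressing limitation 1), while the shrinking token length keeps the later attention layers affordable.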
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
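RELIANT's actual debiasing objective is specific to the paper; as a generic illustration of the fair-KD setup it addresses, the sketch below combines standard logit distillation with a crude demographic-parity penalty. The penalty term is our stand-in for illustration only, not the RELIANT method.

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, groups,
                 alpha=0.5, beta=0.1, tau=2.0):
    """Illustrative sketch only -- NOT the RELIANT objective.

    Combines standard logit distillation with a crude demographic-parity
    penalty: the gap between group-wise mean positive-class probabilities.
    `groups` holds a binary sensitive attribute per node; the sketch assumes
    both groups appear in the batch.
    """
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                  F.softmax(teacher_logits / tau, dim=1),
                  reduction="batchmean") * tau ** 2
    ce = F.cross_entropy(student_logits, labels)
    p = F.softmax(student_logits, dim=1)[:, 1]          # positive-class prob
    parity_gap = (p[groups == 0].mean() - p[groups == 1].mean()).abs()
    return ce + alpha * kd + beta * parity_gap
```

The point the abstract makes is visible in this toy form: the KD term alone would copy whatever bias the teacher's logits encode, so a debiasing mechanism has to act on the student directly.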
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-traced samples.
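As an illustration of the correlation-based temporal accumulation idea, here is a speculative sketch that gates motion-warped history features by their per-pixel cosine similarity to the current features. The gating convolution and its inputs are our assumptions; the paper's network is more elaborate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAccumulation(nn.Module):
    """Hypothetical sketch of correlation-gated temporal accumulation.

    Compares current features with motion-warped features from the previous
    frame; a per-pixel blend weight derived from their correlation decides
    how much history to keep, stabilizing the sequence over time.
    """
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, curr_feat, warped_prev_feat):
        # per-pixel cosine correlation between current and warped history
        corr = F.cosine_similarity(curr_feat, warped_prev_feat, dim=1, eps=1e-6)
        x = torch.cat([curr_feat, warped_prev_feat, corr.unsqueeze(1)], dim=1)
        w = torch.sigmoid(self.gate(x))        # per-pixel history weight
        return w * warped_prev_feat + (1 - w) * curr_feat
```

Low correlation (e.g., after disocclusion) drives the weight toward the current frame, while high correlation lets history accumulate, which is the usual rationale for this kind of gate.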
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer. Moreover, we find that the previous metric, PartPQ, is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features from things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term this model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme, in which parts and wholes interact through masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analyses demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results at a significantly lower cost: 70% fewer GFlops and 50% fewer parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
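Masked cross attention itself follows the Mask2Former recipe the abstract cites: queries attend only to pixels inside an associated mask prediction (here, e.g., part queries restricted to their whole-object region). A minimal sketch, assuming a standard nn.MultiheadAttention and a boolean blocking mask:

```python
import torch
import torch.nn as nn

def masked_cross_attention(q, kv, block_mask, attn):
    """Hypothetical sketch of Mask2Former-style masked cross attention.

    q:          (B, Nq, C) queries (e.g., part queries)
    kv:         (B, Npix, C) flattened pixel features
    block_mask: (B, Nq, Npix) bool, True where attention is BLOCKED, e.g.
                pixels falling outside the predicted whole-object mask
    attn:       nn.MultiheadAttention built with batch_first=True
    """
    # unblock rows whose mask would forbid everything (softmax would NaN)
    empty = block_mask.all(dim=-1, keepdim=True)
    block_mask = block_mask & ~empty
    mask = block_mask.repeat_interleave(attn.num_heads, dim=0)  # per head
    out, _ = attn(q, kv, kv, attn_mask=mask)
    return out

# usage sketch:
# attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
```

Restricting each query's receptive field to its predicted region is what makes the part-whole interaction explicit rather than leaving it to unconstrained global attention.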
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
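A heavily simplified sketch of the CLIP-conditioned idea: represent each class by the frozen CLIP text embedding of its name and produce one binary mask per class via a dot product with pixel features. The actual Universal Model conditions its segmentation head differently; the projection-plus-dot-product design below is our simplification.

```python
import torch
import torch.nn as nn

class TextDrivenSegHead(nn.Module):
    """Simplified sketch of CLIP-embedding-conditioned segmentation.

    Each organ/tumor class is represented by the (frozen, precomputed) CLIP
    text embedding of its name; a projection aligns it with pixel features,
    and per-class masks come from a dot product, so adding a class only
    requires adding one text embedding.
    """
    def __init__(self, clip_dim, feat_dim):
        super().__init__()
        self.proj = nn.Linear(clip_dim, feat_dim)

    def forward(self, pixel_feats, text_embeds):
        # pixel_feats: (B, C, D, H, W) CT features; text_embeds: (K, clip_dim)
        w = self.proj(text_embeds)                       # (K, C)
        logits = torch.einsum("bcdhw,kc->bkdhw", pixel_feats, w)
        return torch.sigmoid(logits)                     # one binary mask per class
```

Because classes live in a shared text-embedding space rather than fixed output channels, related structures stay semantically close and new classes can be appended without retraining the old ones from scratch, which is the extension property the abstract highlights.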
This paper illustrates the technologies behind user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
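The abstract does not detail the architecture, so the following is a speculative sketch of how expert rules might constrain a Transformer's next-intent prediction, with rules compiled into a boolean mask over intents. Every name and the masking scheme here are assumptions, not AlipayKG's actual design.

```python
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    """Speculative sketch of rule-constrained next-intent prediction.

    A Transformer encodes the user's behavior sequence; logits over intents
    are then masked by knowledge-graph expert rules that exclude intents
    incompatible with the recent context.
    """
    def __init__(self, num_intents, dim=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(num_intents, dim)
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, num_intents)

    def forward(self, behavior_ids, rule_mask):
        # behavior_ids: (B, T) past intent ids; rule_mask: (B, num_intents)
        # bool, True where a KG rule permits the intent as a next step
        h = self.encoder(self.embed(behavior_ids))
        logits = self.head(h[:, -1])                 # predict from last step
        return logits.masked_fill(~rule_mask, float("-inf"))
```

Hard-masking by rules is one way such a system could retain explainability: any prediction can be traced both to the learned sequence model and to the symbolic rule that allowed it.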
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility and maintain low confidence in such black-box systems, a problem exacerbated by poor generalization to out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in the clinic. Our work sheds light on clinically safe applications and explainable AI, and can contribute to trustworthiness in the medical domain.
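Uncertainty via subjective logic typically follows the standard evidential recipe: non-negative evidence parameterizes a Dirichlet distribution whose strength yields a closed-form uncertainty mass. Below is a sketch of that standard recipe; EvidenceCap's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Sketch of subjective-logic uncertainty for segmentation.

    Non-negative evidence parameterizes a Dirichlet over the K classes;
    expected class probabilities and a single per-pixel uncertainty mass
    follow in closed form.
    """
    evidence = F.softplus(logits)            # (B, K, H, W), >= 0
    alpha = evidence + 1.0                   # Dirichlet parameters
    S = alpha.sum(dim=1, keepdim=True)       # Dirichlet strength
    prob = alpha / S                         # expected class probabilities
    K = logits.shape[1]
    uncertainty = K / S                      # high where evidence is scarce
    return prob, uncertainty
```

The appeal for clinical use is that uncertainty is a direct output rather than a post hoc calibration: pixels with little gathered evidence get high uncertainty by construction, flagging OOD regions to the clinician.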
Depression is a leading cause of death worldwide, and its diagnosis is nontrivial. Multimodal learning is a popular solution for the automatic diagnosis of depression, but existing works suffer from two main drawbacks: 1) high-order interactions between different modalities cannot be well exploited; and 2) the interpretability of the models is weak. To remedy these drawbacks, we propose a multimodal multi-order factor fusion (MMFF) method. Our method exploits the high-order interactions between different modalities by extracting and assembling modality factors under the guidance of a shared latent proxy. We conduct extensive experiments on two recent and popular datasets, E-DAIC-WOZ and CMDC, and the results show that our method achieves significantly better performance than existing approaches. Besides, by analyzing the factor assembly process, our model can intuitively show the contribution of each factor, which helps us understand the fusion mechanism.
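As a loose illustration of proxy-guided factor assembly, the sketch below scores per-modality factors against a shared learnable proxy and assembles them with softmax weights, which double as the per-factor contributions mentioned above. The decomposition and scoring details are our assumptions, not MMFF's exact design.

```python
import torch
import torch.nn as nn

class FactorFusion(nn.Module):
    """Loose sketch of proxy-guided factor assembly (details differ from MMFF).

    Each modality is decomposed into several factors; a shared latent proxy
    scores every factor, and the softmax weights used for assembly serve as
    an interpretable per-factor contribution measure.
    """
    def __init__(self, dims, num_factors=4, hidden=128):
        super().__init__()
        self.extract = nn.ModuleList(
            nn.Linear(d, num_factors * hidden) for d in dims)
        self.proxy = nn.Parameter(torch.randn(hidden))
        self.k, self.h = num_factors, hidden

    def forward(self, feats):                # one (B, d_i) tensor per modality
        factors = torch.cat([e(f).view(-1, self.k, self.h)
                             for e, f in zip(self.extract, feats)], dim=1)
        scores = factors @ self.proxy        # (B, num_modalities * k)
        weights = scores.softmax(dim=1)      # per-factor contributions
        fused = (weights.unsqueeze(-1) * factors).sum(dim=1)
        return fused, weights
```

Inspecting `weights` after training is the kind of analysis the abstract alludes to: it reveals which modality factors dominate each prediction, addressing the interpretability drawback.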